† Corresponding author. E-mail:
Project supported by the National Natural Science Foundation of China (Grant Nos. 21190040, 91430217, and 11305176).
Cognitive behaviors are determined by underlying neural networks. Many brain functions, such as learning and memory, have been successfully described by attractor dynamics. For decision making in the brain, however, a quantitative description of the global attractor landscape has not yet been given. Here, we developed a theoretical framework to quantify the landscape associated with the steady-state probability distribution and the associated steady-state curl flux, which measures the degree of non-equilibrium through the degree of detailed-balance breaking, for decision making. We quantified the decision-making process with optimal paths from the undecided attractor state to the decided attractor states, identified as basins of attraction on the landscape. Both the landscape and the flux determine the kinetic paths and speed. The kinetics and global stability of decision making are explored by quantifying the landscape topography through the barrier heights and the mean first passage time. Our theoretical predictions agree with experimental observations: more errors occur under time pressure. We quantitatively explored two mechanisms of the speed-accuracy tradeoff with speed emphasis and further uncovered the tradeoffs among speed, accuracy, and energy cost. Our results imply that there is an optimal balance among speed, accuracy, and energy cost in decision making. We uncovered possible mechanisms of changes of mind and how mind changes improve performance in decision processes. Our landscape approach can facilitate an understanding of the underlying physical mechanisms of cognitive processes and identify the key factors in the corresponding neural networks.
The neural circuit as a dynamical system is the basis of cognitive function, including decision making.[1–4] We face different choices in our daily lives. Most decisions are made unconsciously. Sometimes we must first evaluate the costs and benefits of addressing the situations of risk or uncertainty before we make an optimal decision, choosing from a set of alternatives.[5–7] Understanding the mechanisms of decision making has been challenging. In recent years, researchers have made progress in both theoretical and experimental fields related to cognitive processes, such as decision making.[8–11]
The diffusion model has been successful in describing behavioral responses such as accuracy and the shape of response-time distributions in two-choice decision-making tasks.[8,12] In the diffusion model, a decision variable evolves from a starting point until, as a result of fluctuations, it reaches one of two response boundaries corresponding to the two decisions. Although the diffusion model gives a good account of behavioral data, some features observed in experiments cannot be easily captured by it. For example, the experimental observation of longer response times in error trials than in correct trials cannot be fitted by the diffusion model unless its parameters (the initial condition and the drift rate) are assumed to vary across trials. In the delayed visual motion discrimination task, the monkey is required to hold the decision in working memory for a few seconds.[13] The working memory needed to account for the delayed response in this task cannot be easily explained by the diffusion model.
However, these features can be naturally explained in the biophysically motivated attractor model of a two-choice decision-making task,[14–16] where a decision is made when the system is attracted to one of the two decided attractors associated with the two choices. Attractor dynamics have been shown to be successful in describing biological and cognitive processes.[3,17–23] In the attractor model, the working memory can be stored in the decided attractor even after the stimulus is removed. The dynamics of the attractor model are dominated by an attractor landscape in which the states correspond to the activities of neural populations. Both behavioral and neurophysiological data in the decision-making process can be well described by such a model.[14–16] However, the attractor landscape introduced there is a qualitative concept, and further quantification is required.
Based on the advantages of the above models, Roxin and Ledberg made progress in relating the diffusion model to neuronal activities by suggesting a general method to reduce multi-dimensional biophysical models to a one-dimensional nonlinear diffusion model.[11] This one-dimensional nonlinear diffusion model also provides an excellent fit to behavioral data in two-choice decision-making tasks. Furthermore, an analytical form of the energy function can be constructed, whose negative gradient serves as the driving force of the system. Although great efforts have been made to relate the diffusion model to biophysically realistic neural circuits, the detailed information about the complicated dynamical system consisting of a great number of neurons cannot easily be obtained from the one-dimensional diffusion model. The main purpose of our work is to explore the underlying mechanisms of the cognitive processes of decision making from a physical and quantitative perspective.
In general, biophysics-based decision-making models can capture more neurophysiological information, such as the evolution of the average firing rates of neural populations and the sources of fluctuations.[14–16] Parameters in the model are directly related to biologically meaningful quantities, which gives us a chance to explore the neuronal mechanisms of decision-making systems. However, the attractor landscape in the attractor model is not completely quantified. Although the position of each attractor state is shown, the relative weights of these states and the quantified attractor landscape are not given.[14–16] Quantifying the topography of the attractor landscape through the relative weights of the states can account for the stability of the global system, particularly for the functional attractors. Previous works have shown better performance in easier decision tasks.[15,16] The underlying mechanisms determining the speed, accuracy, and changes of mind in the attractor model remain challenging to quantify. Furthermore, it should be noted that realistic neural networks are always non-equilibrium systems because of the material, energy, and information exchanges with the environment. For general non-equilibrium dynamical systems, the driving force of the dynamics cannot be written as a pure gradient of an energy landscape.[20–23]
In our previous works, we developed a landscape and flux theory for general non-equilibrium dynamical systems.[20–25] The potential landscape we constructed is closely related to the steady-state probability distribution of the non-equilibrium system. The probability density distributions of neuronal activity variables have been studied to explore the influences of fluctuations on decision-making behavior.[26,27] We found that the dynamics of non-equilibrium systems are determined by both the underlying landscape and the curl probability flux. The flux, in addition to the gradient of the non-equilibrium landscape, is responsible for many characteristic non-equilibrium behaviors. For example, our previous studies showed that the flux provides the main driving force of oscillatory behaviors in neural networks.[23] Moreover, as a result of the flux, the dominant kinetic path does not necessarily pass through the landscape saddle points to reach the local minimum, and the forward and backward paths are irreversible.[22,24] Based on the landscape and flux theory, we also established a non-equilibrium thermodynamics that extends the equilibrium thermodynamics.[23,24,28,30] The entropy production rate in non-equilibrium systems is linked to the flux term, which originates from the energy pump from the environment.[29–31] Quantifying the entropy production rate or energy dissipation rate gives new insights into the energy cost in non-equilibrium biological systems.
In this work, we applied the landscape and flux theory to a biophysics-based model to quantitatively investigate the nature of the decision-making process from a physical perspective. We quantified the basins of attraction as the fates of decision making with higher probabilities. The stability of the basins of attraction corresponding to functional states was explored by quantifying the underlying potential landscape topography through the barrier heights and the kinetic transition times characterized by the mean first passage time between the basins of attraction. We also quantified the optimal path from the undecided state to the decided state with a path integral approach.[32] We found that both the landscape and the flux determine the dynamical processes and the associated speed. Furthermore, we explored how the potential landscapes are influenced by changes in the key factors of the underlying neural network. The underlying mechanism of the speed-accuracy tradeoff was quantitatively explored by varying the additional stimulus input and the input threshold, both of which increase the baseline activity of the integrator neurons. Unlike previous works, our quantifications of the decision time and accuracy avoid time-consuming statistical calculations over the data. Furthermore, we quantified the energy costs and explored the tradeoffs among speed, accuracy, and energy cost in decision making. We found that speed emphasis costs more energy per unit time. However, a varying input threshold and an additional input play different roles in regulating the total energy cost, which is defined as the entropy production rate multiplied by the decision time. If the input threshold is the main regulation mechanism, the total energy cost increases monotonically as the accuracy increases, and it decreases monotonically as the decision time decreases.
When presenting an additional stimulus input is the dominant mechanism, the total energy cost in decision making changes non-monotonically as the additional input increases. Reasonable but suboptimal accuracy and performance can be achieved with optimal energy cost and speed. In other words, for decision making in this case, there is an optimal energy cost with a nearly optimal fast speed at intermediate accuracy. We also explored the mechanism of changes of mind and suggested a physical explanation of this interesting phenomenon observed in experiments. Above all, the novelty of our work lies in the global quantification of the dynamics, the identification of the driving forces (landscape and curl flux) and the thermodynamics (energy cost), the quantification of the underlying decision-making mechanisms through the optimal paths and the speed of decision making, and the quantification of the relations among speed, accuracy, energy cost, and changes of mind.
In realistic biological systems, there are always intrinsic and extrinsic fluctuations.[33] For neural circuits, the fluctuating components of the inputs from outside the circuit give the external fluctuations, and the statistical fluctuations within the circuit give the intrinsic fluctuations. Therefore, we should take fluctuations into consideration when studying the dynamics of neural network systems. The underlying dynamics of neural network systems are generally nonlinear and unpredictable, and chaos can emerge. The conventional way of exploring neural network dynamical systems by following single trajectories of the time evolution of the system cannot easily capture the global properties of the system. However, the probabilistic evolution is often linear and predictable, and it can shape the global nature of the stochastic dynamics. Therefore, we focus on the probabilistic evolution of the system by solving the corresponding Fokker–Planck equations. We can obtain the steady-state probability distribution Pss, which satisfies ∂Pss/∂t = 0. Then, the probabilistic landscape can be quantified as U = −ln Pss, where U is the quantified potential landscape and Pss is the steady-state probability distribution.[20–23] Because the weight of each state is inhomogeneously distributed in state space, the dynamical system is expected to be attracted to states with higher probabilities and lower potentials during its temporal evolution. The states with locally highest probability represent attractor states, which are typically related to biological functions.
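As a minimal numerical sketch of this construction (not the decision-making model itself), the landscape U = −ln Pss can be estimated by histogramming a long stochastic trajectory that ergodically samples the steady-state distribution. The one-dimensional bistable force F(x) = x − x³ and all numerical settings below are illustrative assumptions:

```python
import numpy as np

def quantify_landscape(force, noise_amp, x0=0.0, dt=1e-3, n_steps=500_000,
                       bins=60, x_range=(-2.5, 2.5), seed=0):
    """Estimate U = -ln(Pss) by sampling the steady-state probability
    distribution with one long Langevin trajectory."""
    rng = np.random.default_rng(seed)
    # Pre-generate the Gaussian noise increments for speed
    noise = noise_amp * np.sqrt(dt) * rng.standard_normal(n_steps)
    samples = np.empty(n_steps)
    x = x0
    for i in range(n_steps):
        # Euler-Maruyama step: dx = F(x) dt + noise increment
        x += force(x) * dt + noise[i]
        samples[i] = x
    p_ss, edges = np.histogram(samples, bins=bins, range=x_range, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = p_ss > 0          # avoid log(0) in unvisited bins
    return centers[mask], -np.log(p_ss[mask])

# Toy bistable system F(x) = x - x^3: two basins of attraction near x = +/-1
centers, U = quantify_landscape(lambda x: x - x**3, noise_amp=0.5)
print(abs(centers[np.argmin(U)]))  # the deepest basin sits near |x| = 1
```

The deepest basin of the sampled landscape lies near one of the deterministic attractors, illustrating how attractor states appear as minima of U.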
Cognitive functions are achieved by the collective efforts of neural circuits rather than individual neurons. The potential landscape can provide a quantitative description of the global nature of neural networks rather than the local information from single trajectories.[20–23] To quantify the potential landscapes, we focus on the steady-state probability distributions. Here the noise term should be taken into account to capture the statistical properties. Because we start with the dynamical equations of the neural network, we can write the corresponding Langevin equations as dxi/dt = Fi(x) + ζi, where Fi(x) is the deterministic driving force and ζi is a Gaussian fluctuation term.
We have introduced our non-equilibrium landscape and flux theory for general networks. Here we applied this theory to a simplified biophysics-based model that can account for the experimental results in decision-making processes.[15,16] This simplified model is a reduced version of a neural network model with thousands of spiking neurons that interact with each other. With a mean-field approach, the dynamics of a neural population can be represented by a single unit. The mean activity (firing rate) of a neural population depends on the synaptic input currents. The input current is a function of the synaptic gating variables, which represent the fraction of activated synaptic conductance. Furthermore, the third, inhibitory neural population, through which the two excitatory neural pools inhibit each other, is neglected for simplicity. Then the dynamics of the decision-making neural network model can be represented by the dynamics of the two excitatory neural populations. The details of the model reduction can be found in a previous paper.[15]
As shown in Fig.
According to the above equations, the average gating variables Si at the steady state are positively correlated with the average firing rates of neural populations 1 and 2. Therefore, the gating variables Si also reflect the mean activities of the neural populations. The total synaptic input currents of the two neural populations, dominated by the NMDA receptors, are Ii,tot = JiiSi − JijSj + I0 + Imotion,i (i, j = 1, 2, i ≠ j), where Jii and Jij are the effective synaptic couplings for self-excitation and mutual inhibition, respectively, I0 is the background input, and Imotion,i is the external stimulus input.
Random dot motion (RDM) tasks were designed to study decision-making behavior.[13,35–37] Monkeys are asked to watch the random dot motion and report a decision by a saccadic eye movement. Here we use Imotion,i as the external sensory input to selective neural population i, corresponding to the random-dot stimulus current in RDM tasks. It can be written as Imotion,i = JA,extμ0(1 ± c′/100%), where JA,ext = 5.2 × 10−4 nA·Hz−1 is the external average synaptic coupling with the AMPA receptors.[15] The + or − sign refers to whether the motion direction is the preferred or the non-preferred one of the neural pool. In the schematic diagram Fig.
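The stimulus current formula above can be sketched directly; the coupling value JA,ext = 5.2 × 10−4 nA·Hz−1 is taken from the text, and the function name is ours:

```python
J_A_EXT = 5.2e-4  # nA/Hz: external synaptic coupling via AMPA receptors (from the text)

def motion_input(mu0_hz, coherence_pct):
    """Stimulus currents I_motion,i = J_A,ext * mu0 * (1 +/- c'/100) for the
    preferred (+) and non-preferred (-) selective populations."""
    base = J_A_EXT * mu0_hz
    return base * (1.0 + coherence_pct / 100.0), base * (1.0 - coherence_pct / 100.0)

i_pref, i_nonpref = motion_input(30.0, 0.0)  # zero coherence: identical inputs
print(i_pref == i_nonpref)  # True
```

At zero coherence both populations receive the same current (here 0.0156 nA for μ0 = 30 Hz); any nonzero coherence biases the input toward the preferred population.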
In the original model,[15,16] the stimulus strength μ0 is set to 30 Hz when the stimulus is presented. In this work, we discuss the system with different stimulus strengths μ0 and motion coherences c′; the detailed values are given in the corresponding figure captions. We also found that the ratio b/a plays the role of an input threshold in this dynamical neural model, which we denote by Thin in this paper. We call it the input threshold because the average activity of a selective neural population is very low when the corresponding stimulus input is below Thin. Once the input is beyond this threshold, the activity of the neural population increases significantly. Because parameters a and b strongly influence the dynamics of the decision-making model, we mainly discuss the corresponding details in the results and discussion section. The detailed values of parameters a and b are given in the figure captions and text. The values of the remaining parameters are set as shown above.
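The threshold role of Thin = b/a can be illustrated with the frequency–current relation commonly used in this class of reduced models, r(I) = (aI − b)/(1 − e^{−d(aI−b)}). The parameter values a = 270 Hz/nA, b = 108 Hz, d = 0.154 s are the standard literature values and are assumptions here, not values taken from this paper's figures:

```python
import numpy as np

# Assumed standard parameters of the effective f-I curve (not from this paper)
A, B, D = 270.0, 108.0, 0.154  # Hz/nA, Hz, s

def firing_rate(i_syn):
    """r(I) = (aI - b) / (1 - exp(-d (aI - b))); the removable singularity
    at aI = b is replaced by its limit 1/d."""
    x = A * np.asarray(i_syn, dtype=float) - B
    with np.errstate(over="ignore", invalid="ignore", divide="ignore"):
        r = np.where(np.abs(x) < 1e-9, 1.0 / D, x / (1.0 - np.exp(-D * x)))
    return r

th_in = B / A  # input threshold Th_in = b/a = 0.4 nA
print(float(firing_rate(0.5 * th_in)))  # well below threshold: near-zero rate
print(float(firing_rate(1.5 * th_in)))  # above threshold: tens of Hz
```

Below Thin the rate is essentially zero, while just above it the rate climbs steeply, which is the behavior the text describes.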
Here, we quantitatively uncovered such probabilistic landscapes from the underlying dynamics to explore the global properties of decision-making neural networks. The global stability of the neural circuits, which determines the difficulty of making decisions and of changing decisions, can be explored by quantifying the probability landscape topography and the kinetics of state switchings. The population activity (firing rate) ri of selective excitatory population i is a monotonically increasing function of the corresponding average gating variable Si,[15,16] which means that a larger Si indicates higher activity of neural population i. Therefore, quantifying the landscape of the network in the state plane of (S1, S2) is well suited for a global understanding of the dynamical process of decision making. Because we know the dynamical equations of the decision-making neural network, we can write the corresponding Langevin equations as dSi/dt = Fi(S1, S2) + ηi, where Fi is the deterministic driving force given by the model equations above and ηi is a Gaussian fluctuation term.
Although attractor landscapes have been introduced to describe the dynamics of decision-making neural networks, such landscapes still need to be further quantified. As we discussed in the method section, for a given dynamical neural system, we can obtain the temporal evolution of the probability distribution in the state space by solving the corresponding Fokker–Planck diffusion equation. Furthermore, we can quantify the potential landscape as U = −ln(Pss(S1, S2)).
Before stimulus onset (stimulus input strength μ0 = 0 Hz), three stable attractors coexist in Fig.
The difficulty of this random-dot motion direction discrimination task varies depending on motion coherence c′. When coherence c′ = 0, two selective neural groups receive the same stimulus. Coherence c′ = 1 indicates that there is only one group receiving a stimulus. We can see in Figs.
Figure
The advantage of our approach over the previous studies[15,16] is that we not only quantify the relative weights of states on the landscape but also identify the optimal kinetic paths for the decision-making process. With a path integral approach,[22] we can quantify the weights of the paths between each pair of states. The quantified optimal decision path has the highest probability between the undecided state and each of the two decided states. Furthermore, the way we quantify the decision path avoids the time-consuming numerical calculations needed in previous approaches to obtain statistically meaningful data by averaging over many trials. The details of the path integral approach are given in the Appendices. As shown in Fig.
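The idea of comparing path weights can be sketched with a discretized Onsager–Machlup action S, where the weight of a path is proportional to exp(−S): the optimal path minimizes the action. The two-dimensional gradient force and the two candidate paths below are illustrative assumptions, not the decision-network dynamics:

```python
import numpy as np

def path_action(path, force, diff_coeff, dt):
    """Discretized Onsager-Machlup action S = sum_k |dx/dt - F(x)|^2 / (4D) dt.
    A smaller action means a higher path weight exp(-S)."""
    x = np.asarray(path, dtype=float)
    mid = 0.5 * (x[:-1] + x[1:])              # midpoint force evaluation
    v = (x[1:] - x[:-1]) / dt                 # discrete path velocity
    dev = v - np.array([force(p) for p in mid])
    return float(np.sum(np.sum(dev**2, axis=1)) / (4.0 * diff_coeff) * dt)

# Toy 2D gradient force with a saddle at the origin:
# U(x, y) = (x^2 - 1)^2 + 2 y^2, F = -grad U
force = lambda p: np.array([-4.0 * p[0] * (p[0]**2 - 1.0), -4.0 * p[1]])

n, dt = 200, 0.05
t = np.linspace(-1.0, 1.0, n)
saddle_path = np.stack([t, np.zeros(n)], axis=1)         # crosses the saddle
detour_path = np.stack([t, 0.5 * (1.0 - t**2)], axis=1)  # detours around it
s_saddle = path_action(saddle_path, force, diff_coeff=0.05, dt=dt)
s_detour = path_action(detour_path, force, diff_coeff=0.05, dt=dt)
print(s_saddle < s_detour)  # the saddle route carries the higher weight
```

For this purely gradient force the saddle-crossing path has the smaller action; with a nonzero curl flux, as the text notes, the optimal path can deviate from the saddle route.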
Our quantified potential landscapes show the global properties of the neural network in the course of decision making. In previous studies,[15,16] information on the positions of the attractors can be obtained. However, the quantitative information on the weight of each state of the multidimensional model, and therefore the corresponding landscape, was not known. An advantage of our landscape approach is that it provides a way to quantify the stability of the functional states through the quantification of their weights. The stability of attractors is very important for decision-making networks. For example, if the spontaneous state is not stable before the stimulus onset, more errors occur. If the decided state is not stable after the stimulus offset, the decision can easily be changed by small fluctuations. This may also prevent the decision from being held long enough to produce a timely response, e.g., a saccadic motor response in a visual motion direction discrimination task.
In this study, we use the topography of the underlying landscape, through the barrier height between basins of attraction, to quantify the stability of the stable states. Here, the barrier height is defined as Usaddle − Umin, where Umin is the potential minimum of one local stable state and Usaddle is the potential at the saddle point between two stable states. In addition to barrier heights, we also quantified the mean first passage time (MFPT) from one stable state to another to describe the stability of the stable attractor states (see the Appendices for the methods used to obtain the MFPT). It turns out that the mean first passage time is closely related to the barrier height: escaping from an attractor with a higher barrier takes a longer time.
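The relation between effective barrier height and escape time can be illustrated with a first-passage simulation on a toy one-dimensional double well (an illustrative sketch; the force, noise levels, and numerical settings are assumptions, not the decision-network model):

```python
import numpy as np

def mean_first_passage_time(force, noise_amp, x_start, x_target,
                            dt=0.01, n_trials=200, t_max=200.0, seed=1):
    """Estimate the MFPT from x_start to x_target by averaging the first
    hitting times of many stochastic (Euler-Maruyama) trajectories."""
    rng = np.random.default_rng(seed)
    sqrt_dt = np.sqrt(dt)
    times = []
    for _ in range(n_trials):
        x, t = x_start, 0.0
        while t < t_max:
            x += force(x) * dt + noise_amp * sqrt_dt * rng.standard_normal()
            t += dt
            if x >= x_target:
                times.append(t)
                break
    return float(np.mean(times))

force = lambda x: x - x**3  # double well with a barrier top at x = 0
mfpt_low_noise = mean_first_passage_time(force, 0.6, -1.0, 0.0)
mfpt_high_noise = mean_first_passage_time(force, 0.9, -1.0, 0.0)
# The same barrier is effectively higher at lower noise, so escape takes longer
print(mfpt_low_noise > mfpt_high_noise)
```

The escape time grows sharply as the barrier becomes higher relative to the noise strength, which is the qualitative Kramers-type behavior the text relies on.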
First, we showed the landscapes at different fluctuation levels before the stimulus inputs onset in Figs.
The central undecided attractor disappears very quickly when the stimulus input strength increases. Therefore, to quantify the stability of the different functional states, we show here the barrier heights and the corresponding MFPT versus the varying input over a short range, in which the stimulus strength μ0 varies from 0 to 10 Hz. As shown in Figs.
The time taken in the process of decision making is always an issue of concern. Another benefit of quantifying the mean first passage time is that we can quantitatively explore the decision-making speed or time (from the undecided state to a decided state) under different biological conditions. As shown in previous research, when the motion coherence increases (the decision task becomes easier), the decision time to make the correct choice decreases monotonically, and the decision time in error trials is always longer.[15,36,38] Here, we obtain similar results by computing the corresponding MFPT. As shown in Fig.
When people make decisions, they often face opposing demands of speed and accuracy. How this speed-accuracy tradeoff (SAT) is implemented in neural decision-making circuits has received much attention in recent years.[41–43] In previous models of the decision-making network, the activities of decision-making neurons increase gradually as the stimulus input is presented. Once the activities reach the decision threshold, the decision is made. The decision-making neurons can be seen as integrators of stimulus input information. Mathematically speaking, increasing the initial activities of integrator neurons (the baseline) and reducing the decision threshold seem to be equivalent for speed emphasis because both shorten the process of information accumulation. Meanwhile, decisions become less accurate because the decision-making process can be more easily affected by fluctuations due to the shortened distance between the baseline and the decision threshold. Many modeling studies have implied a lower decision threshold as the mechanism for speed emphasis.[44,45] However, recent human brain-imaging studies and neurophysiological recordings provide strong evidence for the changing-baseline hypothesis.[43,46,47] The corresponding intrinsic mechanisms of the speed-accuracy tradeoff in the attractor model need to be quantified. The advantage of quantifying the potential landscape is that it can directly and quantitatively reflect the influences of varying parameters with specific biological meanings on the landscape, which can help us uncover the mechanisms of the decision-making process.
Based on the changing-baseline hypothesis, theories are proposed to explain how the speed-accuracy tradeoff is controlled in the cortical-basal ganglia circuit.[43] The cortical theory suggests that cortical decision-making integrator neurons receive additional excitatory input (the baseline is increased) with speed emphasis. Previous theoretical studies support this theory with the one-dimensional non-linear diffusion model.[11] In our attractor landscape, presenting additional inputs to selective decision-making integrator neurons can be described as the decision process initiated at state c′ instead of state c in Fig.
Differently from the cortical theory, the striatal theory suggests that with speed emphasis, the striatum receives excitatory input from cortical (non-integrator) neurons, increasing striatal activity and thus decreasing the inhibitory control of the basal ganglia over the brain.[43] Because of the strong functional projections from the basal ganglia back to the cortex via the thalamus, the striatal theory also predicts that the baseline of the cortical decision-making integrator neurons is increased with speed emphasis. As a result, the decisions are fast but error-prone. How can this mechanism be quantitatively explored with our landscape approach? To address this issue, we focused on the input–output function of the selective integrator neurons in the mathematical model. As we introduced in the model section, we defined Thin = b/a as the input threshold of the decision-making neural network model. We found the activities of decision-making integrator neurons are very low when the corresponding stimulus input is below Thin. Once the input is beyond this threshold, the activity of the neural population increases quickly.
Lower input threshold Thin indicates that the selective neurons can be more effectively activated by the same stimulus inputs. In other words, the stimulus inputs become more effective. Therefore, the effects of decreasing input threshold Thin here are equivalent to less inhibitory control in the decision-making integrator neurons, which is consistent with what the striatal theory suggests. According to the input–output function of integrator neurons, a lower Thin increases the baseline. In Fig.
Furthermore, we quantitatively explored how the input threshold affects the performance and decision time of the decision-making processes with the landscape approach. In Figs.
In addition to speed and accuracy, the energy cost is another focus of attention in the decision-making process. Both receiving an additional excitatory input and lowering the input threshold Thin increase the baseline and play similar roles in the speed–accuracy tradeoff with speed emphasis. However, whether the two mechanisms show similar natures when the energy cost is taken into consideration remains to be addressed. Intuitively, speed emphasis is expected to cost more energy for a faster speed. To test this prediction, we calculated the entropy production rate as a measure of the energy dissipation per unit time[28] (see details in the Appendices). We found that the total entropy production rate (the rate of change of the entropy inside the system plus the entropy flow rate from the environment) is always greater than or equal to zero. It has the physical meaning of the energy cost or dissipation rate (the temperature is regarded as a constant for simplicity). For a given non-equilibrium dynamical system (with all parameters set), there will always be a certain amount of energy dissipated per unit time (measured by the entropy production). The entropy production rate is closely related to the probability flux; the flux is the origin of the entropy production. Therefore, quantifying the flux of the decision-making network gives us a chance to explore the energy cost in the decision-making processes.
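The link between flux and entropy production can be illustrated on a small discrete-state Markov network, where the steady-state entropy production rate is EPR = (1/2) Σij (Jij − Jji) ln(Jij/Jji) with probability fluxes Jij = pi kij; it vanishes exactly when detailed balance holds. The three-state rate matrices below are toy assumptions, not the decision-network model:

```python
import numpy as np

def entropy_production_rate(k):
    """Steady-state entropy production rate of a Markov jump network.
    k[i, j] is the transition rate from state i to state j."""
    n = k.shape[0]
    # Master-equation generator; solve gen @ p = 0 with normalization sum(p) = 1
    gen = k.T - np.diag(k.sum(axis=1))
    a = np.vstack([gen, np.ones(n)])
    rhs = np.zeros(n + 1)
    rhs[-1] = 1.0
    p, *_ = np.linalg.lstsq(a, rhs, rcond=None)
    epr = 0.0
    for i in range(n):
        for j in range(n):
            if i != j and k[i, j] > 0 and k[j, i] > 0:
                jf, jb = p[i] * k[i, j], p[j] * k[j, i]   # forward/backward fluxes
                epr += 0.5 * (jf - jb) * np.log(jf / jb)
    return epr

# Symmetric 3-state cycle: detailed balance holds, so EPR = 0
k_eq = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
# Biased cycle: a net circular flux breaks detailed balance, so EPR > 0
k_neq = np.array([[0., 2., 0.5], [0.5, 0., 2.], [2., 0.5, 0.]])
print(entropy_production_rate(k_eq))   # ~0
print(entropy_production_rate(k_neq))  # > 0
```

Only the network with a net curl flux dissipates: the entropy production is nonzero precisely when the steady-state fluxes break detailed balance, which is the point made in the text.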
Figures
Let us focus now on the speed–accuracy tradeoff mechanism by varying input threshold Thin, as the striatal theory suggests. As shown in Fig.
In addition, focusing on the speed–accuracy tradeoff mechanism proposed in the cortical theory, we find that the energy cost does not change monotonically when the cortical decision-making integrator neurons receive a larger additional input. As shown in Fig.
In summary, we have quantitatively discussed the cortical theory and the striatal theory of the speed–accuracy tradeoff in decision making. If the energy cost is taken into consideration, there should be a speed–accuracy–energy tradeoff. With speed emphasis, although the accuracy is sacrificed, the energy cost may be minimized. Our results imply that there is an optimal balance among speed, accuracy, and energy cost with a varying additional input. They may also serve as a basis for the optimal design of decision making with respect to speed, accuracy, and energy cost. The fMRI study supports our prediction that more energy is consumed per unit time with speed emphasis, but the prediction for the total energy cost of the whole decision process still needs to be confirmed by future experiments.
In our daily lives, making decisions is often accompanied by situations in which we change our minds. Usually, such changes lead to the correction of initial errors. An additional strong opposite stimulus may result in decision reversal because the corresponding attractor landscape reverses. However, a decision reversal sometimes happens without adding an input in the opposite direction. Here, the changes of mind we discuss refer to making a different choice after an initial decision has been made, with no change in the direction of the stimulus inputs (random-dot motion coherence c′).
Without an opposite input, the two attractors of the decided states do not reverse. We obtain some insights from the recent research findings: when strong inputs are presented to both selective decision-making neural populations, a new state emerges.[15,44] From Fig.
Figures
According to our potential landscape theory, we can provide a physical explanation of other experimental findings on changes of mind. Previous works show that, with increasing coherence, the probability of changes from the correct choice to the wrong one decreases monotonically.[39,44,45] Meanwhile, changes to the correct choice peak at intermediate motion strength and then decrease gradually. First, we sum up the whole process of changes of mind in three necessary steps: making the initial choice; being attracted to the new basin of attraction, in other words, reaching a “double-up” state (high activities for both groups of neurons, the central basin); and, at last, making a different choice. A change of mind requires all three steps. These experimental findings can then be easily understood with our potential landscape theory.
Although changes from the wrong to the correct choice are always more frequent than changes from the correct to the wrong choice, the trends of the probabilities of the two types of changes versus the motion coherence are slightly different. We have shown that the network is more likely to make a correct decision at a higher coherence level. Compared with the landscape shown in Fig.
We also used the path integral method to quantify the weights of the paths of changes of mind, to give a quantitative explanation of the experimental observations on changes of mind.[22] We can compare the probabilities of the two paths in changes of mind for a given coherence c′. In Fig.
We believe that changes of mind are an intrinsic mechanism for improving performance in decision-making tasks. Initial correct decisions are more likely to be kept. For error trials, changes of mind give the neural network a second chance to make a new choice; thus, many errors will be corrected. We know that a longer reaction time is important for the performance (accuracy) of decision making. There is also evidence showing that when subjects were asked to perform decision tasks more slowly, there were fewer changes of mind and more accurate initial decisions.[39] We have discussed the mechanism of the speed–accuracy tradeoff in decision making from the landscape perspective. Here, we also explored the effects of the input threshold Thin on the potential landscapes during the process of changes of mind. In Fig.
A lower input threshold Thin reduces the difficulty of making a choice, both correct and incorrect. It also makes the new central attractor (the “double-up” state) stronger, allowing more changes. Therefore, we can conclude that although time pressure may lead to more initial errors, more changes of mind will be made to correct these errors, provided large enough inputs are presented. The speed–accuracy tradeoff always works. Fortunately, the mechanism of changes of mind guarantees reasonable decision-making performance with speed emphasis, as long as people have the chance to make the changes. We have discussed the speed–accuracy–energy tradeoff. It seems that not focusing on accuracy may save energy. However, this conclusion is less clear if we take changes of mind into consideration. A faster speed may result in more initial errors; once an initial error is corrected by a change of mind, an additional energy cost is required. As shown in Fig.
Although attractor ideas have been used to describe cognitive processes such as decision making, a quantitative description of attractor landscapes has not yet been completely given, particularly at the non-equilibrium level. Here, we developed a theoretical framework, previously shown to be successful in describing associative memory and circadian rhythms,[23] to quantify the landscape for decision making. Furthermore, we quantified the decision-making processes by the optimal paths from the undecided attractor states to the decided attractor states on the landscape. An advantage of our landscape theory is that it shows not only the locations of the attractors but also their weights or depths. We can quantify the stability of the functional attractors with the corresponding escape times and barrier heights on the potential landscape. The escape time, quantified by the mean first passage time (MFPT), can also be used to quantify the decision times that are important in decision-making processes. Unlike the traditional way of obtaining reaction times and performance, our quantification of these features avoids time-consuming statistical calculations over the data. Based on the potential landscapes and the optimal paths, we provided quantitative explanations of the experimentally observed behaviors and explored the underlying mechanisms of the speed–accuracy–energy tradeoff and changes of mind in the decision processes.
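As an illustration of how landscape topography and the MFPT can be estimated in practice, the sketch below simulates a toy two-population rate model (all parameters hypothetical, not the paper's reduced model), constructs U = −ln P_ss from the long-run histogram of the population-activity difference, reads off a barrier height between the decided basins, and estimates the MFPT from the undecided state by repeated first-passage trials:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-population rate model (hypothetical parameters):
# self-excitation plus mutual inhibition.
def drift(x, c=0.0):
    x1, x2 = x
    return np.array([-x1 + np.tanh(1.2 * x1 - 1.0 * x2 + c),
                     -x2 + np.tanh(1.2 * x2 - 1.0 * x1 - c)])

def step(x, dt=0.01, D=0.2, c=0.0):
    """One Euler-Maruyama step of dx/dt = F(x) + sqrt(2D)*noise."""
    return x + drift(x, c) * dt + np.sqrt(2 * D * dt) * rng.standard_normal(2)

# 1) Landscape U = -ln P_ss from the long-run histogram of x1 - x2.
steps = 200_000
x = np.zeros(2)
diff = np.empty(steps)
for k in range(steps):
    x = step(x)
    diff[k] = x[0] - x[1]
hist, edges = np.histogram(diff, bins=60, density=True)
U = -np.log(hist + 1e-12)
U -= U.min()                       # set the deepest basin to U = 0

# Barrier height: U at the undecided midpoint (x1 = x2) relative to
# the decided minima, a topography measure of global stability.
mid = np.argmin(np.abs(0.5 * (edges[:-1] + edges[1:])))
print("barrier height ~", U[mid])

# 2) MFPT: mean time from the undecided state to either decision
# threshold on |x1 - x2|, estimated over repeated trials.
def first_passage(thresh=1.0, dt=0.01, max_steps=100_000):
    x = np.zeros(2)
    for k in range(max_steps):
        x = step(x, dt=dt)
        if abs(x[0] - x[1]) > thresh:
            return k * dt
    return max_steps * dt

mfpt = np.mean([first_passage() for _ in range(50)])
print("MFPT ~", mfpt)
```

In this one-trajectory estimate the barrier height and the MFPT both reflect the same landscape topography: a higher barrier between the undecided and decided basins means a longer escape time.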
Following the neural trajectories, previous research showed the correlation between decision time and accuracy for decision tasks of varying difficulty (c′).[15,16] However, the underlying intrinsic mechanisms of the tradeoffs among speed, accuracy, and particularly the energy cost in decision making have not been explored. Recent fMRI studies on the speed–accuracy tradeoff strongly suggest that the brain implements speed emphasis by increasing the baseline activity of cortical integrator neurons. We quantitatively discussed two mechanisms of speed emphasis with our landscape approach: one increases the baseline directly by presenting an additional excitatory input (cortical theory), and the other increases the baseline indirectly by reducing the inhibitory control of the integrator neurons (striatal theory). Our results suggest that these two mechanisms show similar properties in the speed–accuracy tradeoff; i.e., speed is increased at the expense of accuracy and a higher energy cost per unit time. However, our results predict that the two theories can be distinguished by the total energy cost over the whole decision-making process. If varying the input threshold (striatal theory) is the main regulatory mechanism of the speed–accuracy tradeoff, the energy cost increases monotonically with increasing accuracy and decreases monotonically with decreasing decision time. When presenting an additional input is the dominant mechanism regulating the speed–accuracy tradeoff, the total energy cost does not change monotonically with increasing additional input. There is an optimal total energy cost, with nearly optimal speed, at intermediate accuracy. In other words, reasonable but suboptimal accuracy and performance can be achieved with optimal energy cost and speed. Some of our predictions are supported by experimental recordings showing that the BOLD signal increases in faster decisions.
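The cortical (additional-input) mechanism can be probed numerically along these lines. The sketch below runs repeated decision trials of a toy mutual-inhibition model, with and without a common excitatory input `extra` that raises both baselines, and records the mean reaction time and accuracy; the model, its parameters, and the decision rule are illustrative assumptions, not the networks analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy mutual-inhibition model (hypothetical parameters). 'extra' is a
# common excitatory input raising both baselines, a stand-in for the
# cortical speed-emphasis mechanism; the bias c favors unit 1.
def trial(c=0.05, extra=0.0, thresh=0.7, dt=0.01, D=0.15, max_steps=50_000):
    """One decision trial: returns (decision time, correct?)."""
    x = np.zeros(2)
    for k in range(max_steps):
        f1 = -x[0] + np.tanh(1.2 * x[0] - x[1] + c + extra)
        f2 = -x[1] + np.tanh(1.2 * x[1] - x[0] - c + extra)
        x = x + np.array([f1, f2]) * dt \
              + np.sqrt(2 * D * dt) * rng.standard_normal(2)
        if x[0] > thresh or x[1] > thresh:
            return k * dt, bool(x[0] > thresh)   # unit 1 = correct choice
    return max_steps * dt, False                 # no decision reached

results = {}
for extra in (0.0, 0.3):
    trials = [trial(extra=extra) for _ in range(200)]
    rt = float(np.mean([t for t, _ in trials]))
    acc = float(np.mean([ok for _, ok in trials]))
    results[extra] = (rt, acc)
    print(f"extra={extra}: mean RT={rt:.2f}, accuracy={acc:.2f}")
```

Sweeping `extra` (or, for the striatal mechanism, the decision threshold) over a range and plotting reaction time against accuracy would trace out the tradeoff curve for each mechanism in this toy setting.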
The total energy cost over the whole decision process should be tested in future experiments, which might also help in the optimal design of decision making with respect to speed, accuracy, and energy cost. It should be noted that we used a reduced two-population model to describe the decision-making process. By adding more biological details to the neural network, such as the basal ganglia and globus pallidus, which are associated with movements, we may uncover more detailed mechanisms of decision making through quantifying the corresponding potential landscape and kinetic paths.
We explored the mechanism of changes of mind with our potential landscape approach and found that it may be closely associated with the new state that emerges when large stimulus inputs are presented. We also gave a physical explanation of why changes from the wrong to the correct choice occur more frequently and why more changes occur at low coherence levels, as observed in the experiments. We found that although errors are more likely to be made for a shorter decision time, there will also be more chances for changes to correct these errors.
Our approach provides a general way to investigate cognitive behaviors, which are determined by the underlying neural networks, and the corresponding mechanisms. We hope to apply our theory to more complicated systems in future studies, for example, decision making with multiple alternatives and decisions associated with memory retrieval.